
Integrated Pipeline for Coronary Angiography With Automated Lesion Profiling, Virtual Stenting, and 100-Vessel FFR Validation

Kopanitsa, Georgy, Metsker, Oleg, Yakovlev, Alexey

arXiv.org Artificial Intelligence

Coronary angiography is the main tool for assessing coronary artery disease, but visual grading of stenosis is variable and only moderately related to ischaemia. Wire-based fractional flow reserve (FFR) improves lesion selection but is not used systematically. Angiography-derived indices such as quantitative flow ratio (QFR) offer wire-free physiology, yet many tools are workflow-intensive and separate from automated anatomy analysis and virtual PCI planning. We developed AngioAI-QFR, an end-to-end, angiography-only pipeline combining deep learning stenosis detection, lumen segmentation, centreline and diameter extraction, per-millimetre Relative Flow Capacity (RFC) profiling, and virtual stenting with automatic recomputation of angiography-derived QFR. The system was evaluated in 100 consecutive vessels with invasive FFR as the reference. Primary endpoints were agreement with FFR (correlation, mean absolute error) and diagnostic performance for FFR <= 0.80. On held-out frames, stenosis detection achieved precision 0.97 and lumen segmentation a Dice coefficient of 0.78. Across 100 vessels, AngioAI-QFR correlated strongly with FFR (r = 0.89, MAE 0.045). The AUC for detecting FFR <= 0.80 was 0.93, with sensitivity 0.88 and specificity 0.86. The pipeline completed fully automatically in 93 percent of vessels, with a median time to result of 41 s. RFC profiling distinguished focal from diffuse capacity loss, and virtual stenting predicted larger QFR gain in focal than in diffuse disease. AngioAI-QFR provides a practical, near-real-time pipeline that unifies computer vision, functional profiling, and virtual PCI with automated angiography-derived physiology.
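The diagnostic endpoints reported above (sensitivity and specificity at the FFR <= 0.80 cut-off, plus mean absolute error against invasive FFR) can be sketched as follows. The paired values are illustrative only, not data from the study:

```python
# Hypothetical paired measurements: (invasive FFR reference, angiography-derived estimate).
pairs = [(0.92, 0.90), (0.78, 0.76), (0.81, 0.79),
         (0.70, 0.74), (0.85, 0.88), (0.60, 0.63)]

def diagnostic_performance(pairs, cutoff=0.80):
    """Sensitivity/specificity for detecting FFR <= cutoff, plus MAE vs. the reference."""
    tp = sum(1 for ffr, est in pairs if ffr <= cutoff and est <= cutoff)
    fn = sum(1 for ffr, est in pairs if ffr <= cutoff and est > cutoff)
    tn = sum(1 for ffr, est in pairs if ffr > cutoff and est > cutoff)
    fp = sum(1 for ffr, est in pairs if ffr > cutoff and est <= cutoff)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    mae = sum(abs(ffr - est) for ffr, est in pairs) / len(pairs)
    return sensitivity, specificity, mae

sens, spec, mae = diagnostic_performance(pairs)
```

A full evaluation would also compute the AUC by sweeping the estimate threshold; the fixed 0.80 cut-off here matches the paper's primary endpoint.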


A survey of using EHR as real-world evidence for discovering and validating new drug indications

Talukdar, Nabasmita, Zhang, Xiaodan, Paithankar, Shreya, Wang, Hui, Chen, Bin

arXiv.org Artificial Intelligence

Electronic Health Records (EHRs) have been increasingly used as real-world evidence (RWE) to support the discovery and validation of new drug indications. This paper surveys current approaches to EHR-based drug repurposing, covering data sources, processing methodologies, and representation techniques. It discusses study designs and statistical frameworks for evaluating drug efficacy. Key challenges in validation are discussed, with emphasis on the role of large language models (LLMs) and target trial emulation. By synthesizing recent developments and methodological advances, this work provides a foundational resource for researchers aiming to translate real-world data into actionable drug-repurposing evidence.


BrainRotViT: Transformer-ResNet Hybrid for Explainable Modeling of Brain Aging from 3D sMRI

Jalal, Wasif, Rahman, Md Nafiu, Rahman, Atif Hasan, Rahman, M. Sohel

arXiv.org Artificial Intelligence

The human brain undergoes continuous transformations across the lifespan, a natural component of aging that does not inherently signal pathological conditions [1]. Neurodegenerative disorders such as dementia can compromise brain structure and accelerate aging processes. Understanding and characterizing healthy brain aging patterns therefore becomes essential for distinguishing normal aging from pathological neurodegeneration, potentially enabling earlier detection of neurodegenerative diseases. The Brain Age-Gap (BAG), i.e. the discrepancy between predicted brain age and chronological age, has emerged as a robust biomarker that captures pathological brain processes and offers insight into the rate at which an individual's brain ages relative to others in the population [2, 3]. It is not only associated with neurological disorders such as Alzheimer's disease, cognitive impairment, and Autism Spectrum Disorder, but also serves as an indicator of all-cause mortality [4, 5, 6, 7, 8]. Brain age estimation has been approached through both conventional and machine learning techniques, analyzing the whole brain, specific regions, or localized patches [9, 10, 11]. One study presented a method using T1-weighted MRI to predict age through region-level and voxel-level metrics [12]. Regression-based machine learning has shown promise for brain age prediction, with kernel regression applied to whole-brain MRI across diverse age ranges [13]. Various algorithms, including Support Vector Regression and Binary Decision Trees, have been compared for their brain age prediction capabilities [14]. Additional regression techniques such as Relevance Vector Regression, Twin Support Vector Regression, and Gaussian Process Regression have been explored across different imaging modalities for age estimation and mortality prediction [11, 15, 16, 17].
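The Brain Age-Gap described above is simply the signed difference between model-predicted and chronological age; a minimal sketch (the ages are made up for illustration):

```python
def brain_age_gap(predicted_ages, chronological_ages):
    """BAG > 0 means the brain appears older than its chronological age."""
    return [p - c for p, c in zip(predicted_ages, chronological_ages)]

# Illustrative values only: a positive gap suggests accelerated aging,
# a negative gap a younger-appearing brain.
gaps = brain_age_gap([72.5, 60.0, 81.0], [70.0, 63.0, 81.0])
```

In practice, BAG estimates are usually bias-corrected, since regression models tend to over-predict young ages and under-predict old ones.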


A Method for Characterizing Disease Progression from Acute Kidney Injury to Chronic Kidney Disease

Fang, Yilu, Nestor, Jordan G., Ta, Casey N., Kneifati-Hayek, Jerard Z., Weng, Chunhua

arXiv.org Artificial Intelligence

Patients with acute kidney injury (AKI) are at high risk of developing chronic kidney disease (CKD), but identifying those at greatest risk remains challenging. We used electronic health record (EHR) data to dynamically track AKI patients' clinical evolution and characterize AKI-to-CKD progression. Post-AKI clinical states were identified by clustering patient vectors derived from longitudinal medical codes and creatinine measurements. Transition probabilities between states and progression to CKD were estimated using multi-state modeling. After identifying common post-AKI trajectories, CKD risk factors in AKI subpopulations were identified through survival analysis. Of 20,699 patients with AKI at admission, 3,491 (17%) developed CKD. We identified fifteen distinct post-AKI states, each with different probabilities of CKD development. Most patients (75%, n=15,607) remained in a single state or made only one transition during the study period. Both established (e.g., AKI severity, diabetes, hypertension, heart failure, liver disease) and novel CKD risk factors were identified, with their impact varying across these clinical states. This study demonstrates a data-driven approach for identifying high-risk AKI patients, supporting the development of decision-support tools for early CKD detection and intervention.
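The multi-state modeling step can be illustrated with the simplest estimator of its kind: empirical transition probabilities P(next state | current state) counted from observed state sequences. The state labels below are hypothetical, not the fifteen states identified in the study:

```python
from collections import Counter

def transition_probabilities(sequences):
    """Estimate P(next state | current state) from observed trajectories."""
    counts, totals = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {(a, b): n / totals[a] for (a, b), n in counts.items()}

# Two toy post-AKI trajectories using hypothetical states "AKI", "S1", "CKD".
probs = transition_probabilities([["AKI", "S1", "CKD"],
                                  ["AKI", "S1", "S1"]])
```

A full multi-state model would additionally handle censoring and time-varying hazards, which this frequency count ignores.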


Forecasting Spoken Language Development in Children with Cochlear Implants Using Preimplantation MRI

Wang, Yanlin, Yuan, Di, Dettman, Shani, Choo, Dawn, Xu, Emily Shimeng, Thomas, Denise, Ryan, Maura E, Wong, Patrick C M, Young, Nancy M

arXiv.org Artificial Intelligence

Cochlear implants (CI) significantly improve spoken language in children with severe-to-profound sensorineural hearing loss (SNHL), yet outcomes remain more variable than in children with normal hearing. This variability cannot be reliably predicted for individual children from age at implantation or residual hearing. This study aims to compare the accuracy of traditional machine learning (ML) with deep transfer learning (DTL) algorithms in predicting post-CI spoken language development of children with bilateral SNHL, using a binary classification model of high versus low language improvers. A total of 278 implanted children were enrolled from three centers. Prediction models based on brain neuroanatomic features were built with traditional ML and DTL and evaluated by accuracy, sensitivity, and specificity. DTL prediction models using a bilinear attention-based fusion strategy achieved: accuracy of 92.39% (95% CI, 90.70%-94.07%), sensitivity of 91.22% (95% CI, 89.98%-92.47%), specificity of 93.56% (95% CI, 90.91%-96.21%), and area under the curve (AUC) of 0.977 (95% CI, 0.969-0.986). DTL outperformed traditional ML models on all outcome measures, benefiting from the direct capture of discriminative, task-specific information, an advantage of the representation learning this approach enables over traditional ML. The results support the feasibility of a single DTL prediction model for language prediction in children served by CI programs worldwide.
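The abstract does not detail the bilinear attention-based fusion; one common form combines two feature vectors through per-unit bilinear maps followed by a softmax weighting, sketched here in NumPy. All dimensions, branch names, and weights are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, k = 4, 5, 3          # feature dims of two branches; number of fused units
x = rng.standard_normal(d1)  # e.g. features from one neuroanatomic branch
y = rng.standard_normal(d2)  # features from a second branch
W = rng.standard_normal((k, d1, d2))  # one bilinear form per fused unit

fused = np.einsum("i,kij,j->k", x, W, y)       # z_k = x^T W_k y
weights = np.exp(fused) / np.exp(fused).sum()  # softmax attention over fused units
out = weights * fused                          # attention-weighted fused features
```

The bilinear term lets the fused representation capture pairwise interactions between the two branches, which a simple concatenation would miss.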



Identification and Estimation of Joint Probabilities of Potential Outcomes in Observational Studies with Covariate Information

Neural Information Processing Systems

In practical science, it is crucial to evaluate the likelihood of one event causing another event. For example, epidemiologists pay attention to determining the likelihood of a particular exposure being the cause of a particular disease. Notions such as "sufficiency" and "necessity and sufficiency" are important concepts for formalizing such causal questions.
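For reference, the standard probabilities-of-causation definitions from the causality literature (Pearl), with $x, y$ the observed exposure and outcome, $x', y'$ their complements, and $Y_x$ the potential outcome under exposure $x$:

```latex
% Probability of necessity: the outcome would not have occurred absent the exposure.
\mathrm{PN} = P(Y_{x'} = y' \mid X = x, Y = y)
% Probability of sufficiency: the exposure would produce the outcome where it was absent.
\mathrm{PS} = P(Y_{x} = y \mid X = x', Y = y')
% Probability of necessity and sufficiency: joint probability of both potential outcomes.
\mathrm{PNS} = P(Y_{x} = y,\; Y_{x'} = y')
```

These joint probabilities of potential outcomes are not point-identified from observational data alone, which is why the paper studies identification with covariate information.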



AI Debate Aids Assessment of Controversial Claims

Rahman, Salman, Issaka, Sheriff, Suvarna, Ashima, Liu, Genglin, Shiffer, James, Lee, Jaeyoung, Parvez, Md Rizwan, Palangi, Hamid, Feng, Shi, Peng, Nanyun, Choi, Yejin, Michael, Julian, Jiang, Liwei, Gabriel, Saadia

arXiv.org Artificial Intelligence

As AI grows more powerful, it will increasingly shape how we understand the world. But with this influence comes the risk of amplifying misinformation and deepening social divides, especially on consequential topics where factual accuracy directly impacts well-being. Scalable Oversight aims to ensure AI systems remain truthful even when their capabilities exceed those of their evaluators. Yet when humans serve as evaluators, their own beliefs and biases can impair judgment. We study whether AI debate can guide biased judges toward the truth by having two AI systems debate opposing sides of controversial factuality claims about COVID-19 and climate change, topics on which people hold strong prior beliefs. We conduct two studies. Study I recruits human judges with either mainstream or skeptical beliefs who evaluate claims through two protocols: debate (interaction with two AI advisors arguing opposing sides) or consultancy (interaction with a single AI advisor). Study II uses AI judges with and without human-like personas to evaluate the same protocols. In Study I, debate consistently improves human judgment accuracy and confidence calibration, outperforming consultancy by 4-10% across COVID-19 and climate change claims. The improvement is most significant for judges with mainstream beliefs (up to +15.2% accuracy on COVID-19 claims), though debate also helps skeptical judges who initially misjudge claims move toward accurate views (+4.7% accuracy). In Study II, AI judges with human-like personas achieve even higher accuracy (78.5%) than human judges (70.1%) and default AI judges without personas (69.8%), suggesting their potential for supervising frontier AI models. These findings highlight AI debate as a promising path toward scalable, bias-resilient oversight in contested domains.